Similar Resources
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection but numerous extensions have now emerged such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appr...
Graph-Based Lexicon Expansion with Sparsity-Inducing Penalties
We present novel methods to construct compact natural language lexicons within a graphbased semi-supervised learning framework, an attractive platform suited for propagating soft labels onto new natural language types from seed data. To achieve compactness, we induce sparse measures at graph vertices by incorporating sparsity-inducing penalties in Gaussian and entropic pairwise Markov networks ...
CMSC702 Sparsity with L1 penalties
Although the least squares estimate is the linear unbiased estimate with minimum variance, it is possible that a biased estimate will give us a better mean squared error. Consider a case where the true model is Y = β0 + β1X1 + β2X2 + ε and that X1 and X2 are almost perfectly correlated (statisticians say X1 and X2 are co-linear). What happens if we leave X2 out? Then the model is very well approx...
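The excerpt above claims that, under near-perfect collinearity, deliberately omitting X2 yields a biased but lower-MSE estimate than full least squares. A minimal Monte Carlo sketch of that claim (the coefficients, noise level, and correlation strength are my own illustrative choices, not taken from the article):

```python
import numpy as np

# Compare coefficient MSE for the full OLS model vs. the reduced model
# that leaves X2 out, when X1 and X2 are almost perfectly correlated.
rng = np.random.default_rng(0)
n, reps = 200, 500
err_full, err_reduced = [], []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = x1 + 0.01 * rng.normal(size=n)          # X1 and X2 nearly collinear
    y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)
    Xf = np.column_stack([np.ones(n), x1, x2])   # full model
    Xr = np.column_stack([np.ones(n), x1])       # X2 left out
    bf = np.linalg.lstsq(Xf, y, rcond=None)[0]
    br = np.linalg.lstsq(Xr, y, rcond=None)[0]
    err_full.append((bf[1] - 2.0) ** 2)          # squared error for beta1
    err_reduced.append((br[1] - 2.0) ** 2)

mse_full, mse_reduced = np.mean(err_full), np.mean(err_reduced)
print(mse_full, mse_reduced)
```

The full model is unbiased for β1 but its estimate has enormous variance because the design matrix is nearly singular; the reduced model is biased (it estimates roughly β1 + β2, since X2 ≈ X1) yet its much smaller variance gives it a lower mean squared error here.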
Structured Variable Selection with Sparsity-Inducing Norms
We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual l1-norm and the group l1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problem...
Gap Safe Screening Rules for Sparsity Enforcing Penalties
In high dimensional regression settings, sparsity enforcing penalties have proved useful to regularize the data-fitting term. A recently introduced technique called screening rules proposes to ignore some variables in the optimization, leveraging the expected sparsity of the solutions and consequently leading to faster solvers. When the procedure is guaranteed not to discard variables wrongly the...
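For the lasso, a gap safe screening rule of the kind the abstract above refers to can be sketched in a few lines: rescale the residual into a dual-feasible point, use the duality gap to bound the distance to the dual optimum, and discard every feature whose test score stays strictly below 1. The code below is a self-contained numpy sketch under standard lasso formulas, not the authors' implementation; the data and regularization level are my own illustrative choices.

```python
import numpy as np

def lasso_cd(X, y, lam, epochs):
    """Cyclic coordinate descent for (1/2)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(epochs):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # residual excluding feature j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def gap_safe_mask(X, y, lam, b):
    """Gap safe sphere test: True entries are provably zero at the optimum."""
    res = y - X @ b
    # Dual-feasible point obtained by rescaling the current residual.
    theta = res / max(lam, np.abs(X.T @ res).max())
    primal = 0.5 * res @ res + lam * np.abs(b).sum()
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * ((theta - y / lam) ** 2).sum()
    radius = np.sqrt(2.0 * max(primal - dual, 0.0)) / lam
    return np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) < 1.0

rng = np.random.default_rng(0)
n, p = 100, 30
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
y = X @ beta_true + 0.1 * rng.normal(size=n)
lam = 0.5 * np.abs(X.T @ y).max()

b_rough = lasso_cd(X, y, lam, epochs=20)          # cheap approximate solution
mask = gap_safe_mask(X, y, lam, b_rough)
b_star = lasso_cd(X, y, lam, epochs=2000)         # high-accuracy reference
# Safety: no screened coordinate is active in the near-exact solution.
print(int(mask.sum()), float(np.abs(b_star[mask]).max()) if mask.any() else 0.0)
```

The "safe" guarantee is exactly the property checked at the end: the rule may screen few variables when the current iterate is poor (the sphere radius shrinks with the duality gap), but it never discards a variable that is nonzero at the optimum.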
Journal
Journal title: Journal of Computational and Graphical Statistics
Year: 2019
ISSN: 1061-8600, 1537-2715
DOI: 10.1080/10618600.2019.1637749